DRAFT OF July 20, 1995 FOR IEEE TRANSACTIONS ON NEURAL NETWORKS

Gradient Calculations for Dynamic Recurrent Neural Networks: A Survey

Author

  • Barak A. Pearlmutter
Abstract

We survey learning algorithms for recurrent neural networks with hidden units, and put the various techniques into a common framework. We discuss fixed-point learning algorithms, namely recurrent backpropagation and deterministic Boltzmann Machines, and non-fixed-point algorithms, namely backpropagation through time, Elman's history cutoff, and Jordan's output feedback architecture. Forward propagation, an online technique that uses adjoint equations, and variations thereof, are also discussed. In many cases, the unified presentation leads to generalizations of various sorts. We discuss advantages and disadvantages of temporally continuous neural networks in contrast to clocked ones, and continue with some "tricks of the trade" for training, using, and simulating continuous-time and recurrent neural networks. We present some simulations, and at the end, address issues of computational complexity and learning speed.

Keywords: recurrent neural networks, backpropagation through time, real-time recurrent learning, trajectory learning.
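Among the non-fixed-point algorithms the abstract names, backpropagation through time is the most widely used. As a minimal sketch of the idea (a plain tanh RNN with a squared-error loss, chosen for illustration and not the paper's exact formulation), the network is unrolled forward over the trajectory and the adjoint (error) variables are then propagated backward through the same time steps:

```python
import numpy as np

def bptt_gradients(W, U, xs, ys, h0):
    """Backpropagation through time for a minimal tanh RNN.

    Dynamics: h_t = tanh(W @ h_{t-1} + U @ x_t)
    Loss:     L = 0.5 * sum_t ||h_t - y_t||^2

    Unroll the forward pass, then run the adjoint pass backward
    through the stored states, accumulating gradients for W and U.
    """
    T = len(xs)
    hs = [h0]
    for t in range(T):                       # forward unrolling
        hs.append(np.tanh(W @ hs[-1] + U @ xs[t]))

    dW, dU = np.zeros_like(W), np.zeros_like(U)
    delta = np.zeros_like(h0)                # adjoint of the hidden state
    for t in reversed(range(T)):             # backward through time
        delta = delta + (hs[t + 1] - ys[t])  # local error at step t
        dpre = delta * (1.0 - hs[t + 1] ** 2)  # through the tanh nonlinearity
        dW += np.outer(dpre, hs[t])
        dU += np.outer(dpre, xs[t])
        delta = W.T @ dpre                   # propagate adjoint to h_{t-1}
    return dW, dU
```

The backward pass costs the same order of work as the forward pass but requires storing the whole state trajectory, which is the usual trade-off against online methods such as forward propagation (real-time recurrent learning).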


Similar articles


Symmetry constraints for feedforward network models of gradient systems

This paper concerns the use of a priori information on the symmetry of cross differentials available for problems that seek to approximate the gradient of a differentiable function. We derive the appropriate network constraints to incorporate the symmetry information, show that the constraints do not reduce the universal approximation capabilities of feedforward networks, and demonstrate how th...


Some new results on system identification with dynamic neural networks

Nonlinear system online identification via dynamic neural networks is studied in this paper. The main contribution of the paper is that the passivity approach is applied to access several new stable properties of neuro identification. The conditions for passivity, stability, asymptotic stability, and input-to-state stability are established in certain senses. We conclude that the gradient desce...


Learning rules for neuro-controller via simultaneous perturbation

This paper describes learning rules using simultaneous perturbation for a neurocontroller that controls an unknown plant. When we apply a direct control scheme by a neural network, the neural network must learn an inverse system of the unknown plant. In this case, we must know the sensitivity function of the plant when using a kind of gradient method as a learning rule of the neural network. On ...
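The entry above motivates simultaneous perturbation by the fact that only loss measurements, not plant sensitivities, are available. A minimal sketch of an SPSA-style two-measurement gradient estimate (my own illustrative form, not the paper's exact learning rule):

```python
import numpy as np

def sp_gradient(loss, theta, c=1e-2, rng=None):
    """Two-measurement simultaneous-perturbation gradient estimate.

    All parameters are perturbed at once by a random +/-1 vector, so
    only two loss evaluations are needed regardless of the parameter
    dimension -- useful when the plant's sensitivity function is unknown.
    """
    rng = rng or np.random.default_rng()
    delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher perturbation
    diff = loss(theta + c * delta) - loss(theta - c * delta)
    # g_i = (L(theta + c*delta) - L(theta - c*delta)) / (2 * c * delta_i)
    return diff / (2.0 * c) / delta
```

The estimate is unbiased only in expectation over the random perturbations, so in practice it is averaged implicitly over many small update steps.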



Journal:

Volume   Issue 

Pages  -

Publication date: 1995